
    Model for the Induction of Spike Timing-Dependent Plasticity by Pre- and Postsynaptic Spike Trains

    Spike timing-dependent plasticity (STDP), a process in which changes in synaptic strength are determined by the relative timing of pre- and postsynaptic activity, has been studied and modeled by a number of researchers, but many questions remain. It has been suggested that STDP involves a postsynaptic chemical network with stable states corresponding to long-term potentiation (LTP) and long-term depression (LTD). The switching between these states is believed to be driven by the postsynaptic Ca2+ concentration, but how the Ca2+ dynamics triggers either LTP or LTD, depending on the relative timing of pre- and postsynaptic activity, remains unclear. We have investigated a model of STDP that combines (1) the tristable chemical network involving CaMKII and PP2A studied by Pi and Lisman [1] with (2) compartmental modeling of backpropagating action potentials (BPAPs), N-methyl-D-aspartate receptors (NMDARs), and voltage-dependent calcium channels (VDCCs). In previous work we studied how this model responds when a presynaptic pulse arrives either shortly before or shortly after a postsynaptic pulse (a BPAP), and showed that the model leads naturally to LTP when the presynaptic pulse arrives first and to LTD when the postsynaptic pulse arrives first, in agreement with experimental findings (e.g., [2] and [3]). The response to spike triplets and other more complex pre- and postsynaptic spike trains is also of interest. Experiments [4] have shown that the response to such multispike trains is not simply the sum of the responses to the component spike pairs. For example, the response to a spike triplet consisting of pre-post-presynaptic spikes is often not explained by simple addition of the responses to a pre-post spike pair followed by a post-pre spike pair. Previous work has proposed only heuristic rules for such multispike responses.
In this paper we describe the application of our model of STDP to multispike situations. Our model exhibits a non-additive response similar to that observed by Wang et al. [4], and gives insight into how this non-additivity arises from properties of the CaMKII/PP2A network.
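The additive baseline against which such triplet experiments are judged can be sketched with the classic exponential pair-based STDP rule. This is an illustration only, not the authors' biophysical CaMKII/PP2A model; the amplitudes and time constant are arbitrary choices:

```python
import numpy as np

# Classic additive pair-based STDP rule (illustrative; parameters arbitrary):
# the weight change depends only on dt = t_post - t_pre for each spike pair.
def pair_stdp(dt, a_plus=1.0, a_minus=0.5, tau=20.0):
    """Weight change for a single pre/post spike pair; dt in ms."""
    if dt > 0:       # pre before post -> potentiation (LTP)
        return a_plus * np.exp(-dt / tau)
    elif dt < 0:     # post before pre -> depression (LTD)
        return -a_minus * np.exp(dt / tau)
    return 0.0

# Additive prediction for a pre-post-pre triplet at t = 0, 10, 20 ms:
# sum the pre-post pair (dt = +10) and the post-pre pair (dt = -10).
additive = pair_stdp(10.0) + pair_stdp(-10.0)
```

The experiments of Wang et al. [4] find triplet responses that deviate from this simple sum; that deviation is the non-additivity the biophysical model aims to explain.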

    Neurogenesis Deep Learning

    Neural machine learning methods, such as deep neural networks (DNNs), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability to incorporate new information into an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower- and upper-case letters and digits, demonstrate that neurogenesis is well suited to addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms. Comment: 8 pages, 8 figures. Accepted to the 2017 International Joint Conference on Neural Networks (IJCNN 2017).
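The core idea of growing a trained layer while preserving its learned representation can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the layer sizes and initialization scheme are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch (not the paper's method): append new hidden units to a
# trained dense layer, leaving the original weights untouched so existing
# representations are preserved.
def grow_layer(W_in, W_out, n_new, scale=0.01):
    """Add n_new hidden units to a layer with weights
    W_in (inputs x hidden) and W_out (hidden x outputs)."""
    n_inputs = W_in.shape[0]
    n_outputs = W_out.shape[1]
    # New units get small random incoming weights and zero outgoing
    # weights, so the network's output is initially unchanged.
    W_in_grown = np.hstack([W_in, scale * rng.standard_normal((n_inputs, n_new))])
    W_out_grown = np.vstack([W_out, np.zeros((n_new, n_outputs))])
    return W_in_grown, W_out_grown

W_in = rng.standard_normal((784, 100))   # e.g. an MNIST input -> hidden layer
W_out = rng.standard_normal((100, 10))
W_in2, W_out2 = grow_layer(W_in, W_out, n_new=20)

x = rng.standard_normal(784)
# With zero outgoing weights, the grown network computes the same output.
same = np.allclose(np.maximum(x @ W_in, 0) @ W_out,
                   np.maximum(x @ W_in2, 0) @ W_out2)
```

Because the new units initially contribute nothing to the output, subsequent training can recruit them for novel classes without first disrupting previously learned behavior, which is the stability-plasticity trade-off the abstract refers to.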

    A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines

    Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem because the primary computational bottleneck for neural networks is the vector-matrix multiply, in which inputs are multiplied by the network weights. Conventional processing architectures are not well suited to simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrarily complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information, as opposed to the non-spiking, rate-coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms. Comment: 8 pages, 4 figures. Preprint of 2017 IJCNN.
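The kind of temporally extended synaptic response the abstract contrasts with binary connections can be sketched as a kernel convolved with a spike train. This is a generic illustration, not the STPU's actual response function; the double-exponential shape, time constants, and spike times are assumptions:

```python
import numpy as np

# Illustrative synaptic response (assumed double-exponential kernel, not
# the STPU specification): each presynaptic spike launches a delayed,
# temporally extended current rather than an instantaneous weighted input.
dt = 1.0                              # time step (ms)
t = np.arange(0, 100, dt)
tau_rise, tau_decay = 2.0, 10.0
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()                # normalize the peak response to 1

spikes = np.zeros_like(t)
spikes[[10, 30, 35]] = 1.0            # presynaptic spikes at 10, 30, 35 ms

# Postsynaptic current: responses to nearby spikes overlap and sum,
# giving the nonlinear temporal integration the abstract describes.
psc = np.convolve(spikes, kernel)[:len(t)]
```

The two closely spaced spikes at 30 and 35 ms produce overlapping responses whose sum exceeds either single-spike response, which is the temporal structure that a simple weight multiply cannot represent.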

    Modeling spike timing-dependent plasticity

    Synaptic strength can be modified by the relative timing of pre- and postsynaptic activity, a process termed spike timing-dependent plasticity (STDP). Experiments have shown that these changes can be long lasting and that synapses can be either strengthened (long-term potentiation, LTP) or weakened (long-term depression, LTD). Building on previous modeling work, we have developed a detailed STDP model that uses a biochemical reaction network capable of three stable states: the LTP state, the LTD state, and the basal state (no synaptic modification). Our model is able to explain STDP observed in hippocampal neurons in response to pre- and postsynaptic spike pairs and more complex spike combinations. The results give insights into how the postsynaptic Ca2+ concentration can lead to LTP or LTD and suggest that voltage-dependent calcium channels (VDCCs) play a key role. The results also show that the model is capable of nonlinear synaptic integration, an important computational property found in neural systems.
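How a Ca2+ level could select among three stable outcomes is often summarized by a two-threshold rule: moderate Ca2+ drives LTD, high Ca2+ drives LTP, and low Ca2+ leaves the synapse at baseline. The sketch below uses that common simplification, not the paper's full CaMKII/PP2A reaction network; the thresholds and the example Ca2+ values are arbitrary:

```python
# Two-threshold calcium rule (a common simplification, not the paper's
# biochemical network): the steady Ca2+ level selects one of three states.
THETA_D, THETA_P = 0.35, 0.55   # assumed LTD and LTP thresholds (uM)

def plasticity_state(ca):
    """Map a postsynaptic Ca2+ level to a plasticity outcome."""
    if ca >= THETA_P:
        return "LTP"            # high Ca2+ -> potentiation
    elif ca >= THETA_D:
        return "LTD"            # moderate Ca2+ -> depression
    return "basal"              # low Ca2+ -> no modification

# Example (values assumed): pre-before-post pairing gives a large Ca2+
# transient, post-before-pre a moderate one, no pairing a small one.
states = [plasticity_state(c) for c in (0.7, 0.45, 0.1)]
# states == ["LTP", "LTD", "basal"]
```

In the full model, the thresholds emerge from the tristable CaMKII/PP2A network rather than being imposed, but the mapping from Ca2+ amplitude to outcome has this qualitative shape.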

    Neural correlates of sparse coding and dimensionality reduction.

    Supported by recent computational studies, there is increasing evidence that a wide range of neuronal responses can be understood as an emergent property of nonnegative sparse coding (NSC), an efficient population coding scheme based on dimensionality reduction and sparsity constraints. We review evidence that NSC might be employed by sensory areas to efficiently encode external stimulus spaces, by some associative areas to conjunctively represent multiple behaviorally relevant variables, and possibly by the basal ganglia to coordinate movement. In addition, NSC might provide a useful theoretical framework under which to understand the often complex and nonintuitive response properties of neurons in other brain areas. Although NSC might not apply to all brain areas (for example, motor or executive function areas), the success of NSC-based models, especially in sensory areas, warrants further investigation for neural correlates in other regions.
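The NSC objective the review builds on can be sketched generically: represent a stimulus as a nonnegative, sparse combination of dictionary elements. The code below is an assumed illustration (random dictionary, arbitrary penalty weight and iteration count), not any specific model from the review:

```python
import numpy as np

# Generic sketch of nonnegative sparse coding (NSC): find a sparse,
# nonnegative code a with x ~= D @ a by projected gradient descent on
# 0.5*||x - D@a||^2 + lam*sum(a). Dictionary and parameters are arbitrary.
rng = np.random.default_rng(1)
D = np.abs(rng.standard_normal((50, 200)))   # overcomplete nonnegative dictionary
D /= np.linalg.norm(D, axis=0)               # unit-norm columns ("neurons")
x = D[:, 3] + 0.5 * D[:, 17]                 # stimulus built from two atoms

lam = 0.05                                   # sparsity penalty
step = 1.0 / np.linalg.norm(D, 2) ** 2       # step size below the Lipschitz bound
a = np.zeros(200)
for _ in range(500):
    grad = D.T @ (D @ a - x) + lam
    a = np.maximum(a - step * grad, 0.0)     # projection keeps the code nonnegative

n_active = int(np.sum(a > 1e-3))             # most coefficients end up at zero
```

The nonnegativity constraint plus the sparsity penalty is what yields the sparse, parts-based population codes that the review relates to observed neuronal response properties.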